2026-02-18 23:51:58,269 INFO mapreduce.MiniHadoopClusterManager: Updated 0 configuration settings from command line.
2026-02-18 23:51:58,309 INFO hdfs.MiniDFSCluster: starting cluster: numNameNodes=1, numDataNodes=1
2026-02-18 23:51:58,604 INFO namenode.NameNode: Formatting using clusterid: testClusterID
2026-02-18 23:51:58,614 INFO namenode.FSEditLog: Edit logging is async:true
2026-02-18 23:51:58,628 INFO namenode.FSNamesystem: KeyProvider: null
2026-02-18 23:51:58,628 INFO namenode.FSNamesystem: fsLock is fair: true
2026-02-18 23:51:58,629 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2026-02-18 23:51:58,647 INFO namenode.FSNamesystem: fsOwner = clickhouse (auth:SIMPLE)
2026-02-18 23:51:58,647 INFO namenode.FSNamesystem: supergroup = supergroup
2026-02-18 23:51:58,647 INFO namenode.FSNamesystem: isPermissionEnabled = true
2026-02-18 23:51:58,647 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2026-02-18 23:51:58,647 INFO namenode.FSNamesystem: HA Enabled: false
2026-02-18 23:51:58,671 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2026-02-18 23:51:58,673 INFO Configuration.deprecation: hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
2026-02-18 23:51:58,673 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2026-02-18 23:51:58,673 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2026-02-18 23:51:58,676 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2026-02-18 23:51:58,676 INFO blockmanagement.BlockManager: The block deletion will start around 2026 Feb 18 23:51:58
2026-02-18 23:51:58,677 INFO util.GSet: Computing capacity for map BlocksMap
2026-02-18 23:51:58,677 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:58,678 INFO util.GSet: 2.0% max memory 7.7 GB = 156.7 MB
2026-02-18 23:51:58,678 INFO util.GSet: capacity = 2^24 = 16777216 entries
2026-02-18 23:51:58,710 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2026-02-18 23:51:58,710 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2026-02-18 23:51:58,714 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2026-02-18 23:51:58,714 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2026-02-18 23:51:58,714 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 0
2026-02-18 23:51:58,714 INFO blockmanagement.BlockManager: defaultReplication = 1
2026-02-18 23:51:58,714 INFO blockmanagement.BlockManager: maxReplication = 512
2026-02-18 23:51:58,714 INFO blockmanagement.BlockManager: minReplication = 1
2026-02-18 23:51:58,715 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2026-02-18 23:51:58,715 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2026-02-18 23:51:58,715 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2026-02-18 23:51:58,715 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2026-02-18 23:51:58,727 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2026-02-18 23:51:58,727 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2026-02-18 23:51:58,727 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2026-02-18 23:51:58,727 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2026-02-18 23:51:58,736 INFO util.GSet: Computing capacity for map INodeMap
2026-02-18 23:51:58,736 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:58,737 INFO util.GSet: 1.0% max memory 7.7 GB = 78.3 MB
2026-02-18 23:51:58,737 INFO util.GSet: capacity = 2^23 = 8388608 entries
2026-02-18 23:51:58,755 INFO namenode.FSDirectory: ACLs enabled? true
2026-02-18 23:51:58,755 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2026-02-18 23:51:58,755 INFO namenode.FSDirectory: XAttrs enabled? true
2026-02-18 23:51:58,755 INFO namenode.NameNode: Caching file names occurring more than 10 times
2026-02-18 23:51:58,764 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2026-02-18 23:51:58,765 INFO snapshot.SnapshotManager: SkipList is disabled
2026-02-18 23:51:58,768 INFO util.GSet: Computing capacity for map cachedBlocks
2026-02-18 23:51:58,768 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:58,768 INFO util.GSet: 0.25% max memory 7.7 GB = 19.6 MB
2026-02-18 23:51:58,768 INFO util.GSet: capacity = 2^21 = 2097152 entries
2026-02-18 23:51:58,777 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2026-02-18 23:51:58,777 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2026-02-18 23:51:58,777 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2026-02-18 23:51:58,779 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2026-02-18 23:51:58,780 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2026-02-18 23:51:58,780 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2026-02-18 23:51:58,780 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:58,781 INFO util.GSet: 0.029999999329447746% max memory 7.7 GB = 2.4 MB
2026-02-18 23:51:58,781 INFO util.GSet: capacity = 2^18 = 262144 entries
2026-02-18 23:51:58,796 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:51:58,808 INFO common.Storage: Storage directory /hadoop-3.3.1/target/test/data/dfs/name-0-1 has been successfully formatted.
2026-02-18 23:51:58,811 INFO common.Storage: Storage directory /hadoop-3.3.1/target/test/data/dfs/name-0-2 has been successfully formatted.
2026-02-18 23:51:58,844 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
2026-02-18 23:51:58,855 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop-3.3.1/target/test/data/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
2026-02-18 23:51:58,931 INFO namenode.FSImageFormatProtobuf: Image file /hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 405 bytes saved in 0 seconds .
2026-02-18 23:51:58,931 INFO namenode.FSImageFormatProtobuf: Image file /hadoop-3.3.1/target/test/data/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 405 bytes saved in 0 seconds .
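
The entries above show a single-NameNode, single-DataNode MiniDFSCluster (numNameNodes=1, numDataNodes=1) having its NameNode storage formatted with clusterid testClusterID and fs.defaultFS fixed at hdfs://127.0.0.1:12222. The log itself was produced by the MiniHadoopClusterManager command-line wrapper; the following Java sketch is only an assumed programmatic equivalent using Hadoop's test API, not the exact invocation behind this run.

    // Minimal sketch (assumption: equivalent setup, not the actual command used here)
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new HdfsConfiguration();
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .nameNodePort(12222)   // fixed NameNode RPC port, as in hdfs://127.0.0.1:12222
                    .numDataNodes(1)       // numDataNodes=1, matching the log
                    .format(true)          // format NameNode storage ("Formatting using clusterid")
                    .build();
            cluster.waitActive();          // returns once the cluster reports "Cluster is active"
            // ... run tests against hdfs://127.0.0.1:12222 ...
            cluster.shutdown();
        }
    }
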
2026-02-18 23:51:58,944 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2026-02-18 23:51:58,980 INFO namenode.FSNamesystem: Stopping services started for active state
2026-02-18 23:51:58,980 INFO namenode.FSNamesystem: Stopping services started for standby state
2026-02-18 23:51:58,981 INFO namenode.NameNode: createNameNode []
2026-02-18 23:51:59,014 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2026-02-18 23:51:59,066 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2026-02-18 23:51:59,066 INFO impl.MetricsSystemImpl: NameNode metrics system started
2026-02-18 23:51:59,086 INFO namenode.NameNodeUtils: fs.defaultFS is hdfs://127.0.0.1:12222
2026-02-18 23:51:59,086 INFO namenode.NameNode: Clients should use 127.0.0.1:12222 to access this namenode/service.
2026-02-18 23:51:59,114 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2026-02-18 23:51:59,119 INFO hdfs.DFSUtil: Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2026-02-18 23:51:59,121 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:0
2026-02-18 23:51:59,128 INFO util.log: Logging initialized @1294ms to org.eclipse.jetty.util.log.Slf4jLog
2026-02-18 23:51:59,171 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2026-02-18 23:51:59,173 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2026-02-18 23:51:59,177 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2026-02-18 23:51:59,178 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2026-02-18 23:51:59,178 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2026-02-18 23:51:59,178 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2026-02-18 23:51:59,179 INFO http.HttpServer2: Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
2026-02-18 23:51:59,179 INFO http.HttpServer2: Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
2026-02-18 23:51:59,179 INFO http.HttpServer2: Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context logs
2026-02-18 23:51:59,195 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2026-02-18 23:51:59,199 INFO http.HttpServer2: Jetty bound to port 44165
2026-02-18 23:51:59,200 INFO server.Server: jetty-9.4.40.v20210413; built: 2021-04-13T20:42:42.668Z; git: b881a572662e1943a14ae12e7e1207989f218b74; jvm 11.0.25+9-post-Ubuntu-1ubuntu122.04
2026-02-18 23:51:59,215 INFO server.session: DefaultSessionIdManager workerName=node0
2026-02-18 23:51:59,215 INFO server.session: No SessionScavenger set, using defaults
2026-02-18 23:51:59,216 INFO server.session: node0 Scavenging every 660000ms
2026-02-18 23:51:59,224 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2026-02-18 23:51:59,226 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@4f8caaf3{logs,/logs,file:///hadoop-3.3.1/logs,AVAILABLE}
2026-02-18 23:51:59,226 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@64a1923a{static,/static,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2026-02-18 23:51:59,274 INFO handler.ContextHandler: Started o.e.j.w.WebAppContext@5972d253{hdfs,/,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{file:/hadoop-3.3.1/share/hadoop/hdfs/webapps/hdfs}
2026-02-18 23:51:59,285 INFO server.AbstractConnector: Started ServerConnector@109f5dd8{HTTP/1.1, (http/1.1)}{localhost:44165}
2026-02-18 23:51:59,285 INFO server.Server: Started @1452ms
2026-02-18 23:51:59,291 INFO namenode.FSEditLog: Edit logging is async:true
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: KeyProvider: null
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: fsLock is fair: true
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: fsOwner = clickhouse (auth:SIMPLE)
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: supergroup = supergroup
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: isPermissionEnabled = true
2026-02-18 23:51:59,311 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2026-02-18 23:51:59,312 INFO namenode.FSNamesystem: HA Enabled: false
2026-02-18 23:51:59,312 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2026-02-18 23:51:59,312 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2026-02-18 23:51:59,312 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2026-02-18 23:51:59,313 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2026-02-18 23:51:59,313 INFO blockmanagement.BlockManager: The block deletion will start around 2026 Feb 18 23:51:59
2026-02-18 23:51:59,313 INFO util.GSet: Computing capacity for map BlocksMap
2026-02-18 23:51:59,313 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:59,313 INFO util.GSet: 2.0% max memory 7.7 GB = 156.7 MB
2026-02-18 23:51:59,313 INFO util.GSet: capacity = 2^24 = 16777216 entries
2026-02-18 23:51:59,315 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 0
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: defaultReplication = 1
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: maxReplication = 512
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: minReplication = 1
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2026-02-18 23:51:59,316 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2026-02-18 23:51:59,316 INFO util.GSet: Computing capacity for map INodeMap
2026-02-18 23:51:59,316 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:59,317 INFO util.GSet: 1.0% max memory 7.7 GB = 78.3 MB
2026-02-18 23:51:59,317 INFO util.GSet: capacity = 2^23 = 8388608 entries
2026-02-18 23:51:59,332 INFO namenode.FSDirectory: ACLs enabled? true
2026-02-18 23:51:59,333 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2026-02-18 23:51:59,333 INFO namenode.FSDirectory: XAttrs enabled? true
2026-02-18 23:51:59,333 INFO namenode.NameNode: Caching file names occurring more than 10 times
2026-02-18 23:51:59,333 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2026-02-18 23:51:59,333 INFO snapshot.SnapshotManager: SkipList is disabled
2026-02-18 23:51:59,333 INFO util.GSet: Computing capacity for map cachedBlocks
2026-02-18 23:51:59,333 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:59,333 INFO util.GSet: 0.25% max memory 7.7 GB = 19.6 MB
2026-02-18 23:51:59,333 INFO util.GSet: capacity = 2^21 = 2097152 entries
2026-02-18 23:51:59,337 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2026-02-18 23:51:59,337 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2026-02-18 23:51:59,337 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2026-02-18 23:51:59,337 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2026-02-18 23:51:59,337 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2026-02-18 23:51:59,337 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2026-02-18 23:51:59,337 INFO util.GSet: VM type = 64-bit
2026-02-18 23:51:59,337 INFO util.GSet: 0.029999999329447746% max memory 7.7 GB = 2.4 MB
2026-02-18 23:51:59,337 INFO util.GSet: capacity = 2^18 = 262144 entries
2026-02-18 23:51:59,344 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/name-0-1/in_use.lock acquired by nodename 423@aad40c9f0bbc
2026-02-18 23:51:59,347 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/name-0-2/in_use.lock acquired by nodename 423@aad40c9f0bbc
2026-02-18 23:51:59,348 INFO namenode.FileJournalManager: Recovering unfinalized segments in /hadoop-3.3.1/target/test/data/dfs/name-0-1/current
2026-02-18 23:51:59,348 INFO namenode.FileJournalManager: Recovering unfinalized segments in /hadoop-3.3.1/target/test/data/dfs/name-0-2/current
2026-02-18 23:51:59,348 INFO namenode.FSImage: No edit log streams selected.
2026-02-18 23:51:59,348 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2026-02-18 23:51:59,360 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
2026-02-18 23:51:59,360 INFO namenode.FSImageFormatPBINode: Successfully loaded 1 inodes
2026-02-18 23:51:59,363 INFO namenode.FSImageFormatPBINode: Completed update blocks map and name cache, total waiting duration 0ms.
2026-02-18 23:51:59,364 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2026-02-18 23:51:59,364 INFO namenode.FSImage: Loaded image for txid 0 from /hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage_0000000000000000000
2026-02-18 23:51:59,366 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2026-02-18 23:51:59,367 INFO namenode.FSEditLog: Starting log segment at 1
2026-02-18 23:51:59,378 INFO namenode.NameCache: initialized with 0 entries 0 lookups
2026-02-18 23:51:59,378 INFO namenode.FSNamesystem: Finished loading FSImage in 40 msecs
2026-02-18 23:51:59,463 INFO namenode.NameNode: RPC server is binding to localhost:12222
2026-02-18 23:51:59,463 INFO namenode.NameNode: Enable NameNode state context:false
2026-02-18 23:51:59,467 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2026-02-18 23:51:59,474 INFO ipc.Server: Starting Socket Reader #1 for port 12222
2026-02-18 23:51:59,621 INFO namenode.FSNamesystem: Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2026-02-18 23:51:59,635 INFO namenode.LeaseManager: Number of blocks under construction: 0
2026-02-18 23:51:59,639 INFO blockmanagement.DatanodeAdminDefaultMonitor: Initialized the Default Decommission and Maintenance monitor
2026-02-18 23:51:59,641 INFO blockmanagement.BlockManager: initializing replication queues
2026-02-18 23:51:59,642 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2026-02-18 23:51:59,642 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2026-02-18 23:51:59,642 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2026-02-18 23:51:59,653 INFO blockmanagement.BlockManager: Total number of blocks = 0
2026-02-18 23:51:59,654 INFO blockmanagement.BlockManager: Number of invalid blocks = 0
2026-02-18 23:51:59,654 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
2026-02-18 23:51:59,654 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0
2026-02-18 23:51:59,654 INFO blockmanagement.BlockManager: Number of blocks being written = 0
2026-02-18 23:51:59,654 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 12 msec
2026-02-18 23:51:59,664 INFO ipc.Server: IPC Server listener on 12222: starting
2026-02-18 23:51:59,664 INFO ipc.Server: IPC Server Responder: starting
2026-02-18 23:51:59,669 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:12222
2026-02-18 23:51:59,672 INFO namenode.FSNamesystem: Starting services required for active state
2026-02-18 23:51:59,672 INFO namenode.FSDirectory: Initializing quota with 12 thread(s)
2026-02-18 23:51:59,675 INFO namenode.FSDirectory: Quota initialization completed in 3 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2026-02-18 23:51:59,681 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2026-02-18 23:51:59,690 INFO hdfs.MiniDFSCluster: Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1,[DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2
2026-02-18 23:51:59,727 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1
2026-02-18 23:51:59,733 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2
2026-02-18 23:51:59,749 INFO impl.MetricsSystemImpl: DataNode metrics system started (again)
2026-02-18 23:51:59,755 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2026-02-18 23:51:59,760 INFO datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2026-02-18 23:51:59,765 INFO datanode.DataNode: Configured hostname is 127.0.0.1
2026-02-18 23:51:59,766 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2026-02-18 23:51:59,771 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2026-02-18 23:51:59,777 INFO datanode.DataNode: Opened streaming server at /127.0.0.1:37107
2026-02-18 23:51:59,779 INFO datanode.DataNode: Balancing bandwidth is 104857600 bytes/s
2026-02-18 23:51:59,779 INFO datanode.DataNode: Number threads for balancing is 100
2026-02-18 23:51:59,810 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2026-02-18 23:51:59,811 INFO http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2026-02-18 23:51:59,814 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2026-02-18 23:51:59,815 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2026-02-18 23:51:59,816 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2026-02-18 23:51:59,816 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2026-02-18 23:51:59,818 INFO http.HttpServer2: Jetty bound to port 41223
2026-02-18 23:51:59,818 INFO server.Server: jetty-9.4.40.v20210413; built: 2021-04-13T20:42:42.668Z; git: b881a572662e1943a14ae12e7e1207989f218b74; jvm 11.0.25+9-post-Ubuntu-1ubuntu122.04
2026-02-18 23:51:59,819 INFO server.session: DefaultSessionIdManager workerName=node0
2026-02-18 23:51:59,819 INFO server.session: No SessionScavenger set, using defaults
2026-02-18 23:51:59,819 INFO server.session: node0 Scavenging every 600000ms
2026-02-18 23:51:59,820 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@2c05ff9d{logs,/logs,file:///hadoop-3.3.1/logs,AVAILABLE}
2026-02-18 23:51:59,820 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@2e1ddc90{static,/static,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2026-02-18 23:51:59,827 INFO handler.ContextHandler: Started o.e.j.w.WebAppContext@22752544{datanode,/,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/datanode/,AVAILABLE}{file:/hadoop-3.3.1/share/hadoop/hdfs/webapps/datanode}
2026-02-18 23:51:59,828 INFO server.AbstractConnector: Started ServerConnector@d5af0a5{HTTP/1.1, (http/1.1)}{localhost:41223}
2026-02-18 23:51:59,828 INFO server.Server: Started @1994ms
2026-02-18 23:51:59,945 WARN web.DatanodeHttpServer: Got null for restCsrfPreventionFilter - will not do any filtering.
2026-02-18 23:51:59,987 INFO web.DatanodeHttpServer: Listening HTTP traffic on /127.0.0.1:42093
2026-02-18 23:51:59,987 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2026-02-18 23:51:59,988 INFO datanode.DataNode: dnUserName = clickhouse
2026-02-18 23:51:59,988 INFO datanode.DataNode: supergroup = supergroup
2026-02-18 23:51:59,996 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2026-02-18 23:51:59,997 INFO ipc.Server: Starting Socket Reader #1 for port 0
2026-02-18 23:52:00,000 INFO datanode.DataNode: Opened IPC server at /127.0.0.1:38627
2026-02-18 23:52:00,016 INFO datanode.DataNode: Refresh request received for nameservices: null
2026-02-18 23:52:00,017 INFO datanode.DataNode: Starting BPOfferServices for nameservices:
2026-02-18 23:52:00,023 INFO datanode.DataNode: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:12222 starting to offer service
2026-02-18 23:52:00,027 INFO ipc.Server: IPC Server Responder: starting
2026-02-18 23:52:00,027 INFO ipc.Server: IPC Server listener on 0: starting
2026-02-18 23:52:00,197 INFO datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:12222
2026-02-18 23:52:00,200 INFO common.Storage: Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
2026-02-18 23:52:00,203 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/data/data1/in_use.lock acquired by nodename 423@aad40c9f0bbc
2026-02-18 23:52:00,203 INFO common.Storage: Storage directory with location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1 is not formatted for namespace 753147953. Formatting...
2026-02-18 23:52:00,204 INFO common.Storage: Generated new storageID DS-d58a5194-b16f-4543-8266-2d369e6e46e3 for directory /hadoop-3.3.1/target/test/data/dfs/data/data1
2026-02-18 23:52:00,210 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/data/data2/in_use.lock acquired by nodename 423@aad40c9f0bbc
2026-02-18 23:52:00,210 INFO common.Storage: Storage directory with location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2 is not formatted for namespace 753147953. Formatting...
2026-02-18 23:52:00,211 INFO common.Storage: Generated new storageID DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94 for directory /hadoop-3.3.1/target/test/data/dfs/data/data2
2026-02-18 23:52:00,238 INFO common.Storage: Analyzing storage directories for bpid BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,238 INFO common.Storage: Locking is disabled for /hadoop-3.3.1/target/test/data/dfs/data/data1/current/BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,239 INFO common.Storage: Block pool storage directory for location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1 and block pool id BP-1405556209-172.17.0.2-1771411918790 is not formatted. Formatting ...
2026-02-18 23:52:00,239 INFO common.Storage: Formatting block pool BP-1405556209-172.17.0.2-1771411918790 directory /hadoop-3.3.1/target/test/data/dfs/data/data1/current/BP-1405556209-172.17.0.2-1771411918790/current
2026-02-18 23:52:00,261 INFO common.Storage: Analyzing storage directories for bpid BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,261 INFO common.Storage: Locking is disabled for /hadoop-3.3.1/target/test/data/dfs/data/data2/current/BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,261 INFO common.Storage: Block pool storage directory for location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2 and block pool id BP-1405556209-172.17.0.2-1771411918790 is not formatted. Formatting ...
2026-02-18 23:52:00,262 INFO common.Storage: Formatting block pool BP-1405556209-172.17.0.2-1771411918790 directory /hadoop-3.3.1/target/test/data/dfs/data/data2/current/BP-1405556209-172.17.0.2-1771411918790/current
2026-02-18 23:52:00,266 INFO datanode.DataNode: Setting up storage: nsid=753147953;bpid=BP-1405556209-172.17.0.2-1771411918790;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=753147953;c=1771411918790;bpid=BP-1405556209-172.17.0.2-1771411918790;dnuuid=null
2026-02-18 23:52:00,269 INFO datanode.DataNode: Generated and persisted new Datanode UUID 96ea874a-e6d1-43e4-9890-1af139d74941
2026-02-18 23:52:00,279 INFO hdfs.MiniDFSCluster: dnInfo.length != numDataNodes
2026-02-18 23:52:00,279 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
2026-02-18 23:52:00,283 INFO impl.FsDatasetImpl: The datanode lock is a read write lock
2026-02-18 23:52:00,328 INFO impl.FsDatasetImpl: Added new volume: DS-d58a5194-b16f-4543-8266-2d369e6e46e3
2026-02-18 23:52:00,328 INFO impl.FsDatasetImpl: Added volume - [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1, StorageType: DISK
2026-02-18 23:52:00,330 INFO impl.FsDatasetImpl: Added new volume: DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94
2026-02-18 23:52:00,330 INFO impl.FsDatasetImpl: Added volume - [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2, StorageType: DISK
2026-02-18 23:52:00,333 INFO impl.MemoryMappableBlockLoader: Initializing cache loader: MemoryMappableBlockLoader.
2026-02-18 23:52:00,335 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
2026-02-18 23:52:00,337 INFO impl.FsDatasetImpl: Adding block pool BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,338 INFO impl.FsDatasetImpl: Scanning block pool BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1...
2026-02-18 23:52:00,338 INFO impl.FsDatasetImpl: Scanning block pool BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2...
2026-02-18 23:52:00,367 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-1405556209-172.17.0.2-1771411918790 on /hadoop-3.3.1/target/test/data/dfs/data/data1: 29ms
2026-02-18 23:52:00,380 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-1405556209-172.17.0.2-1771411918790 on /hadoop-3.3.1/target/test/data/dfs/data/data2: 42ms
2026-02-18 23:52:00,380 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1405556209-172.17.0.2-1771411918790: 43ms
2026-02-18 23:52:00,383 INFO hdfs.MiniDFSCluster: dnInfo.length != numDataNodes
2026-02-18 23:52:00,383 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
2026-02-18 23:52:00,383 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1...
2026-02-18 23:52:00,383 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2...
2026-02-18 23:52:00,383 INFO impl.BlockPoolSlice: Replica Cache file: /hadoop-3.3.1/target/test/data/dfs/data/data2/current/BP-1405556209-172.17.0.2-1771411918790/current/replicas doesn't exist
2026-02-18 23:52:00,383 INFO impl.BlockPoolSlice: Replica Cache file: /hadoop-3.3.1/target/test/data/dfs/data/data1/current/BP-1405556209-172.17.0.2-1771411918790/current/replicas doesn't exist
2026-02-18 23:52:00,385 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2: 2ms
2026-02-18 23:52:00,385 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1: 2ms
2026-02-18 23:52:00,385 INFO impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-1405556209-172.17.0.2-1771411918790: 4ms
2026-02-18 23:52:00,386 INFO checker.ThrottledAsyncChecker: Scheduling a check for /hadoop-3.3.1/target/test/data/dfs/data/data1
2026-02-18 23:52:00,395 INFO checker.DatasetVolumeChecker: Scheduled health check for volume /hadoop-3.3.1/target/test/data/dfs/data/data1
2026-02-18 23:52:00,397 INFO checker.ThrottledAsyncChecker: Scheduling a check for /hadoop-3.3.1/target/test/data/dfs/data/data2
2026-02-18 23:52:00,397 INFO checker.DatasetVolumeChecker: Scheduled health check for volume /hadoop-3.3.1/target/test/data/dfs/data/data2
2026-02-18 23:52:00,400 INFO datanode.VolumeScanner: Now scanning bpid BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2
2026-02-18 23:52:00,400 INFO datanode.VolumeScanner: Now scanning bpid BP-1405556209-172.17.0.2-1771411918790 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1
2026-02-18 23:52:00,402 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data2, DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94): finished scanning block pool BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,402 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data1, DS-d58a5194-b16f-4543-8266-2d369e6e46e3): finished scanning block pool BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:52:00,403 WARN datanode.DirectoryScanner: dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
2026-02-18 23:52:00,403 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting in 1865923ms with interval of 21600000ms and throttle limit of -1ms/s
2026-02-18 23:52:00,411 INFO datanode.DataNode: Block pool BP-1405556209-172.17.0.2-1771411918790 (Datanode Uuid 96ea874a-e6d1-43e4-9890-1af139d74941) service to localhost/127.0.0.1:12222 beginning handshake with NN
2026-02-18 23:52:00,417 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data1, DS-d58a5194-b16f-4543-8266-2d369e6e46e3): no suitable block pools found to scan. Waiting 1814399983 ms.
2026-02-18 23:52:00,417 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data2, DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94): no suitable block pools found to scan. Waiting 1814399983 ms.
2026-02-18 23:52:00,428 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:37107, datanodeUuid=96ea874a-e6d1-43e4-9890-1af139d74941, infoPort=42093, infoSecurePort=0, ipcPort=38627, storageInfo=lv=-57;cid=testClusterID;nsid=753147953;c=1771411918790) storage 96ea874a-e6d1-43e4-9890-1af139d74941
2026-02-18 23:52:00,430 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:37107
2026-02-18 23:52:00,430 INFO blockmanagement.BlockReportLeaseManager: Registered DN 96ea874a-e6d1-43e4-9890-1af139d74941 (127.0.0.1:37107).
2026-02-18 23:52:00,433 INFO datanode.DataNode: Block pool BP-1405556209-172.17.0.2-1771411918790 (Datanode Uuid 96ea874a-e6d1-43e4-9890-1af139d74941) service to localhost/127.0.0.1:12222 successfully registered with NN
2026-02-18 23:52:00,433 INFO datanode.DataNode: For namenode localhost/127.0.0.1:12222 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
2026-02-18 23:52:00,447 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-d58a5194-b16f-4543-8266-2d369e6e46e3 for DN 127.0.0.1:37107
2026-02-18 23:52:00,448 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94 for DN 127.0.0.1:37107
2026-02-18 23:52:00,477 INFO BlockStateChange: BLOCK* processReport 0xf023569c5bee4b89: Processing first storage report for DS-d58a5194-b16f-4543-8266-2d369e6e46e3 from datanode 96ea874a-e6d1-43e4-9890-1af139d74941
2026-02-18 23:52:00,480 INFO BlockStateChange: BLOCK* processReport 0xf023569c5bee4b89: from storage DS-d58a5194-b16f-4543-8266-2d369e6e46e3 node DatanodeRegistration(127.0.0.1:37107, datanodeUuid=96ea874a-e6d1-43e4-9890-1af139d74941, infoPort=42093, infoSecurePort=0, ipcPort=38627, storageInfo=lv=-57;cid=testClusterID;nsid=753147953;c=1771411918790), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2026-02-18 23:52:00,480 INFO BlockStateChange: BLOCK* processReport 0xf023569c5bee4b89: Processing first storage report for DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94 from datanode 96ea874a-e6d1-43e4-9890-1af139d74941
2026-02-18 23:52:00,480 INFO BlockStateChange: BLOCK* processReport 0xf023569c5bee4b89: from storage DS-2744ccfa-08be-4ed1-bcd2-e37a56aa1c94 node DatanodeRegistration(127.0.0.1:37107, datanodeUuid=96ea874a-e6d1-43e4-9890-1af139d74941, infoPort=42093, infoSecurePort=0, ipcPort=38627, storageInfo=lv=-57;cid=testClusterID;nsid=753147953;c=1771411918790), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2026-02-18 23:52:00,502 INFO hdfs.MiniDFSCluster: Cluster is active
2026-02-18 23:52:00,506 INFO mapreduce.MiniHadoopClusterManager: Started MiniDFSCluster -- namenode on port 12222
2026-02-18 23:52:00,510 INFO datanode.DataNode: Successfully sent block report 0xf023569c5bee4b89 to namenode: localhost/127.0.0.1:12222, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 4 msecs to generate and 44 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2026-02-18 23:52:00,511 INFO datanode.DataNode: Got finalize command for block pool BP-1405556209-172.17.0.2-1771411918790
2026-02-18 23:54:41,428 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 9 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 2 1
2026-02-18 23:54:41,450 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:37107 for /02923_hdfs_engine_size_virtual_column_test_faqxe414.data1.tsv
2026-02-18 23:54:41,487 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741825_1001 src: /127.0.0.1:41770 dest: /127.0.0.1:37107
2026-02-18 23:54:41,507 INFO DataNode.clienttrace: src: /127.0.0.1:41770, dest: /127.0.0.1:37107, bytes: 2, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.689793_count_2_pid_613_tid_140081997686336, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741825_1001, duration(ns): 6023462
2026-02-18 23:54:41,507 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2026-02-18 23:54:41,508 INFO hdfs.StateChange: BLOCK* fsync: /02923_hdfs_engine_size_virtual_column_test_faqxe414.data1.tsv for libhdfs3_client_rand_0.689793_count_2_pid_613_tid_140081997686336
2026-02-18 23:54:41,514 INFO namenode.FSNamesystem: BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /02923_hdfs_engine_size_virtual_column_test_faqxe414.data1.tsv
2026-02-18 23:54:41,917 INFO hdfs.StateChange: DIR* completeFile: /02923_hdfs_engine_size_virtual_column_test_faqxe414.data1.tsv is closed by libhdfs3_client_rand_0.689793_count_2_pid_613_tid_140081997686336
2026-02-18 23:54:42,133 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:37107 for /02923_hdfs_engine_size_virtual_column_test_faqxe414.data2.tsv
2026-02-18 23:54:42,135 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741826_1002 src: /127.0.0.1:41778 dest: /127.0.0.1:37107
2026-02-18 23:54:42,141 INFO DataNode.clienttrace: src: /127.0.0.1:41778, dest: /127.0.0.1:37107, bytes: 3, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.774825_count_4_pid_613_tid_140082098398784, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741826_1002, duration(ns): 1652326
2026-02-18 23:54:42,142 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
2026-02-18 23:54:42,142 INFO hdfs.StateChange: BLOCK* fsync: /02923_hdfs_engine_size_virtual_column_test_faqxe414.data2.tsv for libhdfs3_client_rand_0.774825_count_4_pid_613_tid_140082098398784
2026-02-18 23:54:42,143 INFO hdfs.StateChange: DIR* completeFile: /02923_hdfs_engine_size_virtual_column_test_faqxe414.data2.tsv is closed by libhdfs3_client_rand_0.774825_count_4_pid_613_tid_140082098398784
2026-02-18 23:54:42,348 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:37107 for /02923_hdfs_engine_size_virtual_column_test_faqxe414.data3.tsv
2026-02-18 23:54:42,350 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741827_1003 src: /127.0.0.1:41788 dest: /127.0.0.1:37107
2026-02-18 23:54:42,352 INFO DataNode.clienttrace: src: /127.0.0.1:41788, dest: /127.0.0.1:37107, bytes: 4, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.774825_count_6_pid_613_tid_140082090006080, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741827_1003, duration(ns): 1361536
2026-02-18 23:54:42,352 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
2026-02-18 23:54:42,353 INFO hdfs.StateChange: BLOCK* fsync: /02923_hdfs_engine_size_virtual_column_test_faqxe414.data3.tsv for libhdfs3_client_rand_0.774825_count_6_pid_613_tid_140082090006080
2026-02-18 23:54:42,355 INFO hdfs.StateChange: DIR* completeFile: /02923_hdfs_engine_size_virtual_column_test_faqxe414.data3.tsv is closed by libhdfs3_client_rand_0.774825_count_6_pid_613_tid_140082090006080
2026-02-19 00:05:30,880 INFO namenode.FSEditLog: Number of transactions: 20 Total time for transactions(ms): 10 Number of transactions batched in Syncs: 3 Number of syncs: 17 SyncTimes(ms): 3 2
2026-02-19 00:05:30,885 INFO hdfs.StateChange: BLOCK* allocate blk_1073741828_1004, replicas=127.0.0.1:37107 for /test_02725_1.tsv
2026-02-19 00:05:30,887 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741828_1004 src: /127.0.0.1:36170 dest: /127.0.0.1:37107
2026-02-19 00:05:30,891 INFO DataNode.clienttrace: src: /127.0.0.1:36170, dest: /127.0.0.1:37107, bytes: 6, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.875853_count_18_pid_613_tid_140098089748032, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741828_1004, duration(ns): 1595277
2026-02-19 00:05:30,891 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741828_1004, type=LAST_IN_PIPELINE terminating
2026-02-19 00:05:30,891 INFO hdfs.StateChange: BLOCK* fsync: /test_02725_1.tsv for libhdfs3_client_rand_0.875853_count_18_pid_613_tid_140098089748032
2026-02-19 00:05:30,893 INFO hdfs.StateChange: DIR* completeFile: /test_02725_1.tsv is closed by libhdfs3_client_rand_0.875853_count_18_pid_613_tid_140098089748032
2026-02-19 00:05:31,135 INFO hdfs.StateChange: BLOCK* allocate blk_1073741829_1005, replicas=127.0.0.1:37107 for /test_02725_2.tsv
2026-02-19 00:05:31,137 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741829_1005 src: /127.0.0.1:36172 dest: /127.0.0.1:37107
2026-02-19 00:05:31,141 INFO DataNode.clienttrace: src: /127.0.0.1:36172, dest: /127.0.0.1:37107, bytes: 6, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.960885_count_20_pid_613_tid_140098089748032, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741829_1005, duration(ns): 1794054
2026-02-19 00:05:31,141 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741829_1005, type=LAST_IN_PIPELINE terminating
2026-02-19 00:05:31,142 INFO hdfs.StateChange: BLOCK* fsync: /test_02725_2.tsv for libhdfs3_client_rand_0.960885_count_20_pid_613_tid_140098089748032
2026-02-19 00:05:31,143 INFO hdfs.StateChange: DIR* completeFile: /test_02725_2.tsv is closed by libhdfs3_client_rand_0.960885_count_20_pid_613_tid_140098089748032
2026-02-19 00:06:03,380 INFO hdfs.StateChange: BLOCK* allocate blk_1073741830_1006, replicas=127.0.0.1:37107 for /test_02536.jsonl
2026-02-19 00:06:03,383 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741830_1006 src: /127.0.0.1:51088 dest: /127.0.0.1:37107
2026-02-19 00:06:03,386 INFO DataNode.clienttrace: src: /127.0.0.1:51088, dest: /127.0.0.1:37107, bytes: 27, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.681923_count_29_pid_613_tid_140082090006080, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741830_1006, duration(ns): 1492906
2026-02-19 00:06:03,386 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741830_1006, type=LAST_IN_PIPELINE terminating
2026-02-19 00:06:03,386 INFO hdfs.StateChange: BLOCK* fsync: /test_02536.jsonl for libhdfs3_client_rand_0.681923_count_29_pid_613_tid_140082090006080
2026-02-19 00:06:03,388 INFO hdfs.StateChange: DIR* completeFile: /test_02536.jsonl is closed by libhdfs3_client_rand_0.681923_count_29_pid_613_tid_140082090006080
2026-02-19 00:10:20,373 INFO namenode.FSEditLog: Number of transactions: 38 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 6 Number of syncs: 32 SyncTimes(ms): 3 2
2026-02-19 00:10:20,377 INFO hdfs.StateChange: BLOCK* allocate blk_1073741831_1007, replicas=127.0.0.1:37107 for /test_1.tsv
2026-02-19 00:10:20,380 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741831_1007 src: /127.0.0.1:54854 dest: /127.0.0.1:37107
2026-02-19 00:10:20,384 INFO DataNode.clienttrace: src: /127.0.0.1:54854, dest: /127.0.0.1:37107, bytes: 6, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.535263_count_36_pid_613_tid_140082090006080, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741831_1007, duration(ns): 1965660
2026-02-19 00:10:20,384 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741831_1007, type=LAST_IN_PIPELINE terminating
2026-02-19 00:10:20,384 INFO hdfs.StateChange: BLOCK* fsync: /test_1.tsv for libhdfs3_client_rand_0.535263_count_36_pid_613_tid_140082090006080
2026-02-19 00:10:20,386 INFO hdfs.StateChange: DIR* completeFile: /test_1.tsv is closed by libhdfs3_client_rand_0.535263_count_36_pid_613_tid_140082090006080
2026-02-19 00:10:20,416 INFO hdfs.StateChange: BLOCK* allocate blk_1073741832_1008, replicas=127.0.0.1:37107 for /test_2.tsv
2026-02-19 00:10:20,419 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741832_1008 src: /127.0.0.1:54860 dest: /127.0.0.1:37107
2026-02-19 00:10:20,423 INFO DataNode.clienttrace: src: /127.0.0.1:54860, dest: /127.0.0.1:37107, bytes: 6, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.535263_count_38_pid_613_tid_140082090006080, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741832_1008, duration(ns): 2946290
2026-02-19 00:10:20,424 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741832_1008, type=LAST_IN_PIPELINE terminating
2026-02-19 00:10:20,424 INFO hdfs.StateChange: BLOCK* fsync: /test_2.tsv for libhdfs3_client_rand_0.535263_count_38_pid_613_tid_140082090006080
2026-02-19 00:10:20,426 INFO hdfs.StateChange: DIR* completeFile: /test_2.tsv is closed by libhdfs3_client_rand_0.535263_count_38_pid_613_tid_140082090006080
2026-02-19 00:10:20,436 INFO hdfs.StateChange: BLOCK* allocate blk_1073741833_1009, replicas=127.0.0.1:37107 for /test_3.tsv
2026-02-19 00:10:20,438 INFO datanode.DataNode: Receiving BP-1405556209-172.17.0.2-1771411918790:blk_1073741833_1009 src: /127.0.0.1:54872 dest: /127.0.0.1:37107
2026-02-19 00:10:20,442 INFO DataNode.clienttrace: src: /127.0.0.1:54872, dest: /127.0.0.1:37107, bytes: 6, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.535263_count_40_pid_613_tid_140082090006080, offset: 0, srvID: 96ea874a-e6d1-43e4-9890-1af139d74941, blockid: BP-1405556209-172.17.0.2-1771411918790:blk_1073741833_1009, duration(ns): 2210539
2026-02-19 00:10:20,443 INFO datanode.DataNode: PacketResponder: BP-1405556209-172.17.0.2-1771411918790:blk_1073741833_1009, type=LAST_IN_PIPELINE terminating
2026-02-19 00:10:20,443 INFO hdfs.StateChange: BLOCK* fsync: /test_3.tsv for libhdfs3_client_rand_0.535263_count_40_pid_613_tid_140082090006080
2026-02-19 00:10:20,444 INFO hdfs.StateChange: DIR* completeFile: /test_3.tsv is closed by libhdfs3_client_rand_0.535263_count_40_pid_613_tid_140082090006080
2026-02-19 00:23:06,352 INFO datanode.DirectoryScanner: Scan Results: BlockPool BP-1405556209-172.17.0.2-1771411918790 Total blocks: 9, missing metadata files: 0, missing block files: 0, missing blocks in memory: 0, mismatched blocks: 0
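
The write entries above (BLOCK* allocate, DataNode Receiving, BLOCK* fsync, DIR* completeFile) come from libhdfs3 clients used by the ClickHouse HDFS tests, not from Java code. The sketch below is only an assumed Java-API equivalent of that allocate/write/hsync/close sequence against this cluster; the path and file content are hypothetical and do not appear in the log.

    // Minimal sketch (assumption: equivalent client behaviour, the actual writers here are libhdfs3)
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create("hdfs://127.0.0.1:12222"), conf);
            Path path = new Path("/example.tsv");                  // hypothetical path, not from the log
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("1\n".getBytes(StandardCharsets.UTF_8)); // NameNode logs "BLOCK* allocate", DataNode logs "Receiving"
                out.hsync();                                       // NameNode logs "BLOCK* fsync: <path>"
            }                                                      // close() produces "DIR* completeFile: <path> is closed by <client>"
            fs.close();
        }
    }
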